
    Why Do Hearing Aids Fail to Restore Normal Auditory Perception?

    Hearing loss is a widespread condition that is linked to declines in quality of life and mental health. Hearing aids remain the treatment of choice, but, unfortunately, even state-of-the-art devices provide only limited benefit for the perception of speech in noisy environments. While traditionally viewed primarily as a loss of sensitivity, it is now clear that hearing loss has additional effects that cause complex distortions of sound-evoked neural activity that cannot be corrected by amplification alone. Here we describe the effects of hearing loss on neural activity in order to illustrate why current hearing aids are insufficient and to motivate the use of new technologies to explore directions for improving the next generation of devices.

    The neural representation of interaural time differences in gerbils is transformed from midbrain to cortex

    Interaural time differences (ITDs) are the dominant cue for the localization of low-frequency sounds. While much is known about the processing of ITDs in the auditory brainstem and midbrain, there have been relatively few studies of ITD processing in auditory cortex. In this study, we compared the neural representation of ITDs in the inferior colliculus (IC) and primary auditory cortex (A1) of gerbils. Our IC results were largely consistent with previous studies, with most cells responding maximally to ITDs that correspond to the contralateral edge of the physiological range. In A1, however, we found that preferred ITDs were distributed evenly throughout the physiological range without any contralateral bias. This difference in the distribution of preferred ITDs in IC and A1 had a major impact on the coding of ITDs at the population level: while a labeled-line decoder that considered the tuning of individual cells performed well on both IC and A1 responses, a two-channel decoder based on the overall activity in each hemisphere performed poorly on A1 responses relative to either labeled-line decoding of A1 responses or two-channel decoding of IC responses. These results suggest that the neural representation of ITDs in gerbils is transformed from IC to A1 and have important implications for how spatial location may be combined with other acoustic features for the analysis of complex auditory scenes.
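
    To make the two read-out schemes concrete, here is a minimal sketch using synthetic Gaussian ITD tuning curves and an arbitrary noise level (our own illustration, not the study's analysis code). The labeled-line decoder matches the observed population response against each cell's tuning curve, while the two-channel decoder collapses the population into summed activity per hemisphere and reads the ITD out from the left-right difference.

```python
# Toy comparison of a labeled-line decoder and a two-channel (hemispheric) decoder.
# Tuning curves, noise, and hemisphere assignment are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
itds_us = np.linspace(-160, 160, 33)            # assumed physiological range, microseconds
n_cells = 60
preferred = rng.uniform(-160, 160, n_cells)     # hypothetical preferred ITDs
tuning = 20 * np.exp(-0.5 * ((itds_us[None, :] - preferred[:, None]) / 80.0) ** 2)

def labeled_line_decode(rates):
    """Pick the ITD whose expected population response best matches the observed rates."""
    err = ((tuning - rates[:, None]) ** 2).sum(axis=0)
    return itds_us[np.argmin(err)]

def two_channel_decode(rates, hemisphere):
    """Decode from the difference between summed left and right hemisphere activity."""
    diff = rates[hemisphere == 1].sum() - rates[hemisphere == 0].sum()
    expected_diff = tuning[hemisphere == 1].sum(axis=0) - tuning[hemisphere == 0].sum(axis=0)
    return itds_us[np.argmin(np.abs(expected_diff - diff))]

hemisphere = (preferred > 0).astype(int)        # crude left/right split for the sketch
true_itd = 80.0
rates = tuning[:, np.argmin(np.abs(itds_us - true_itd))] + rng.normal(0, 2, n_cells)
print(labeled_line_decode(rates), two_channel_decode(rates, hemisphere))
```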

    Periodotopy in the gerbil inferior colliculus: local clustering rather than a gradient map

    Periodicities in sound waveforms are widespread, and shape important perceptual attributes of sound including rhythm and pitch. Previous studies have indicated that, in the inferior colliculus (IC), a key processing stage in the auditory midbrain, neurons tuned to different periodicities might be arranged along a periodotopic axis which runs approximately orthogonal to the tonotopic axis. Here we map out the topography of frequency and periodicity tuning in the IC of gerbils in unprecedented detail, using pure tones and different periodic sounds, including click trains, sinusoidally amplitude modulated (SAM) noise and iterated rippled noise. We found that while the tonotopic map exhibited a clear and highly reproducible gradient across all animals, periodotopic maps varied greatly across different types of periodic sound and from animal to animal. Furthermore, periodotopic gradients typically explained only about 10% of the variance in modulation tuning between recording sites. However, there was a strong local clustering of periodicity tuning at a spatial scale of ca. 0.5 mm, which also differed from animal to animal.
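
    The two quantities contrasted above, the variance explained by a periodotopic gradient and the degree of local clustering, can be illustrated with a short sketch on synthetic data (the site positions, best modulation frequencies, and 0.5 mm cutoff below are placeholders, not the recorded values).

```python
# Fit a planar periodotopic gradient to best modulation frequency (BMF) across sites,
# report how much variance it explains, and compare BMF similarity of near vs far pairs.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 2.0, size=(200, 2))          # recording-site positions in mm (synthetic)
log_bmf = rng.normal(0, 1, 200)                  # log2 BMF per site (synthetic)

# Planar gradient fit: log_bmf ~ a*x + b*y + c
A = np.column_stack([xy, np.ones(len(xy))])
coef, *_ = np.linalg.lstsq(A, log_bmf, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((log_bmf - pred) ** 2) / np.sum((log_bmf - log_bmf.mean()) ** 2)

# Local clustering: BMF differences for site pairs closer than 0.5 mm vs farther apart
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
bmf_diff = np.abs(log_bmf[:, None] - log_bmf[None, :])
near = (d > 0) & (d < 0.5)
far = d >= 0.5
print(f"gradient R^2 = {r2:.2f}")
print(f"near-pair BMF difference = {bmf_diff[near].mean():.2f}")
print(f"far-pair  BMF difference = {bmf_diff[far].mean():.2f}")
```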

    Compression and amplification algorithms in hearing aids impair the selectivity of neural responses to speech

    In quiet environments, hearing aids improve the perception of low-intensity sounds. However, for high-intensity sounds in background noise, the aids often fail to provide a benefit to the wearer. Here, using large-scale single-neuron recordings from hearing-impaired gerbils (an established animal model of human hearing), we show that hearing aids restore the sensitivity of neural responses to speech, but not their selectivity. Rather than reflecting a deficit in supra-threshold auditory processing, the low selectivity is a consequence of hearing-aid compression (which decreases the spectral and temporal contrasts of incoming sound) and amplification (which distorts neural responses, regardless of whether hearing is impaired). Processing strategies that avoid the trade-off between neural sensitivity and selectivity should improve the performance of hearing aids.
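
    As a rough illustration of why compression reduces the contrasts that neurons are selective for, the sketch below implements a generic wide-dynamic-range compressor (illustrative threshold, ratio, and time constant; not the algorithm of any particular hearing aid) and shows how it shrinks the level difference between a quiet and a loud segment.

```python
# Generic envelope-tracking compressor: above threshold, output level grows only
# 1/ratio as fast as input level, which compresses temporal (and spectral) contrasts.
import numpy as np

def wdrc(signal, fs, threshold_db=45.0, ratio=3.0, tau=0.005):
    alpha = np.exp(-1.0 / (tau * fs))
    env, out = 1e-6, np.empty_like(signal)
    for i, x in enumerate(signal):
        env = max(abs(x), alpha * env + (1 - alpha) * abs(x))   # fast-attack envelope
        level_db = 20 * np.log10(env + 1e-12)
        gain_db = 0.0 if level_db < threshold_db else (threshold_db - level_db) * (1 - 1 / ratio)
        out[i] = x * 10 ** (gain_db / 20)
    return out

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t) * np.where(t < 0.5, 10.0, 1000.0)   # quiet then loud segment
y = wdrc(x, fs)
contrast_in = 20 * np.log10(np.abs(x[t >= 0.5]).max() / np.abs(x[t < 0.5]).max())
contrast_out = 20 * np.log10(np.abs(y[t >= 0.5]).max() / np.abs(y[t < 0.5]).max())
print(f"level contrast: {contrast_in:.1f} dB in, {contrast_out:.1f} dB out")   # reduced by compression
```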

    State-dependent population coding in primary auditory cortex.

    Sensory function is mediated by interactions between external stimuli and intrinsic cortical dynamics that are evident in the modulation of evoked responses by cortical state. A number of recent studies across different modalities have demonstrated that the patterns of activity in neuronal populations can vary strongly between synchronized and desynchronized cortical states, i.e., in the presence or absence of intrinsically generated up and down states. Here we investigated the impact of cortical state on the population coding of tones and speech in the primary auditory cortex (A1) of gerbils, and found that responses were qualitatively different in synchronized and desynchronized cortical states. Activity in synchronized A1 was only weakly modulated by sensory input, and the spike patterns evoked by tones and speech were unreliable and constrained to a small range of patterns. In contrast, responses to tones and speech in desynchronized A1 were temporally precise and reliable across trials, and different speech tokens evoked diverse spike patterns with extremely weak noise correlations, allowing responses to be decoded with nearly perfect accuracy. Restricting the analysis of synchronized A1 to activity within up states yielded similar results, suggesting that up states are not equivalent to brief periods of desynchronization. These findings demonstrate that the representational capacity of A1 depends strongly on cortical state, and suggest that cortical state should be considered as an explicit variable in all studies of sensory processing.
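
    Two of the measures referred to above, pairwise noise correlations and population decoding of spike patterns, can be sketched compactly. The example below uses synthetic spike counts and a simple nearest-centroid decoder as a stand-in for the decoding analysis; the data layout and parameters are assumptions for illustration only.

```python
# Pairwise noise correlations and a nearest-centroid population decoder on synthetic data.
import numpy as np

def noise_correlations(counts):
    """counts: (n_trials, n_neurons) spike counts for one stimulus. Mean pairwise correlation."""
    c = np.corrcoef(counts.T)
    iu = np.triu_indices_from(c, k=1)
    return np.nanmean(c[iu])

def nearest_centroid_decode(train, test):
    """train/test: dicts stimulus -> (n_trials, n_features). Returns decoding accuracy."""
    centroids = {s: r.mean(axis=0) for s, r in train.items()}
    correct = total = 0
    for s, trials in test.items():
        for trial in trials:
            pred = min(centroids, key=lambda k: np.linalg.norm(trial - centroids[k]))
            correct += (pred == s)
            total += 1
    return correct / total

# Synthetic example: two "speech tokens", 20 trials each, 50 neurons
rng = np.random.default_rng(2)
make = lambda mu: mu + rng.normal(0, 1, (20, 50))
train = {0: make(rng.normal(0, 2, 50)), 1: make(rng.normal(0, 2, 50))}
test = {s: r + rng.normal(0, 1, r.shape) for s, r in train.items()}   # noisy held-out trials
print(noise_correlations(train[0]), nearest_centroid_decode(train, test))
```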

    An Overrepresentation of High Frequencies in the Mouse Inferior Colliculus Supports the Processing of Ultrasonic Vocalizations

    Mice are of paramount importance in biomedical research and their vocalizations are a subject of interest for researchers across a wide range of health-related disciplines due to their increasing value as a phenotyping tool in models of neural, speech and language disorders. However, the mechanisms underlying the auditory processing of vocalizations in mice are not well understood. The mouse audiogram shows a peak in sensitivity at frequencies between 15 and 25 kHz, but weaker sensitivity for the higher ultrasonic frequencies at which they typically vocalize. To investigate the auditory processing of vocalizations in mice, we measured evoked potential, single-unit, and multi-unit responses to tones and vocalizations at three different stages along the auditory pathway: the auditory nerve and the cochlear nucleus in the periphery, and the inferior colliculus in the midbrain. Auditory brainstem response measurements suggested stronger responses in the midbrain relative to the periphery for frequencies higher than 32 kHz. This result was confirmed by single- and multi-unit recordings showing that high ultrasonic frequency tones and vocalizations elicited responses from only a small fraction of cells in the periphery, while a much larger fraction of cells responded in the inferior colliculus. These results suggest that the processing of communication calls in mice is supported by a specialization of the auditory system for high frequencies that emerges at central stations of the auditory pathway.

    Midbrain adaptation may set the stage for the perception of musical beat

    The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the beat interpretations that human listeners most commonly perceived over other possible interpretations. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat.
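
    A toy version of the firing-rate adaptation account is easy to write down. In the sketch below (illustrative time constant and adaptation strength, not fitted values), each event in a rhythm drives up an adaptation variable that recovers exponentially during gaps, so events that follow long gaps evoke larger responses and become natural candidates for the perceived beat.

```python
# Simple firing-rate adaptation model applied to a rhythmic event sequence.
import numpy as np

def adapted_responses(event_times_s, tau=0.5, strength=0.6):
    """Return a relative response amplitude for each event in a rhythmic sequence."""
    responses, adaptation, last_t = [], 0.0, None
    for t in event_times_s:
        if last_t is not None:
            adaptation *= np.exp(-(t - last_t) / tau)    # recovery during the preceding gap
        responses.append(1.0 - adaptation)               # larger response when less adapted
        adaptation = adaptation + strength * (1.0 - adaptation)   # each event adds adaptation
        last_t = t
    return np.array(responses)

# A simple rhythm: a long gap before the notional downbeat, short gaps elsewhere
rhythm = np.array([0.0, 0.25, 0.5, 0.75, 2.0, 2.25, 2.5, 2.75])
print(np.round(adapted_responses(rhythm), 2))   # events after long gaps get emphasized
```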

    Timing Precision in Population Coding of Natural Scenes in the Early Visual System

    The timing of spiking activity across neurons is a fundamental aspect of the neural population code. Individual neurons in the retina, thalamus, and cortex can have very precise and repeatable responses but exhibit degraded temporal precision in response to suboptimal stimuli. To investigate the functional implications for neural populations in natural conditions, we recorded in vivo the simultaneous responses, to movies of natural scenes, of multiple thalamic neurons likely converging to a common neuronal target in primary visual cortex. We show that the response of individual neurons is less precise at lower contrast, but that spike timing precision across neurons is relatively insensitive to global changes in visual contrast. Overall, spike timing precision within and across cells is on the order of 10 ms. Since closely timed spikes are more efficient in inducing a spike in downstream cortical neurons, and since fine temporal precision is necessary to represent the more slowly varying natural environment, we argue that preserving relative spike timing at a resolution of approximately 10 ms is a crucial property of the neural code entering cortex.
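
    One simple way to quantify the kind of timing precision discussed above is to detect firing events in the pooled response and measure the spread of individual spike times around them. The sketch below does this on synthetic spike trains; the event window and jitter values are arbitrary stand-ins for real recordings.

```python
# Estimate spike timing precision as the within-event jitter of spike times across trials.
import numpy as np

def timing_jitter_ms(spike_times_per_trial, event_window_ms=25.0):
    """Cluster pooled spikes into events along time and return the mean within-event SD (ms)."""
    all_spikes = np.sort(np.concatenate(spike_times_per_trial))
    # Split into events wherever consecutive spikes are separated by more than the window
    breaks = np.where(np.diff(all_spikes) > event_window_ms)[0] + 1
    events = np.split(all_spikes, breaks)
    jitters = [ev.std() for ev in events if len(ev) > 3]
    return float(np.mean(jitters)) if jitters else np.nan

# Synthetic trials: spikes near 100, 300 and 700 ms with ~5 ms jitter
rng = np.random.default_rng(3)
trials = [np.array([100.0, 300.0, 700.0]) + rng.normal(0, 5, 3) for _ in range(30)]
print(f"estimated jitter: {timing_jitter_ms(trials):.1f} ms")
```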

    Harnessing the power of artificial intelligence to transform hearing healthcare and research

    The advances in artificial intelligence that are transforming many fields have yet to make an impact in hearing. Hearing healthcare continues to rely on a labour-intensive service model that fails to provide access to the majority of those in need, while hearing research suffers from a lack of computational tools with the capacity to match the complexities of auditory processing. This Perspective is a call for the artificial intelligence and hearing communities to come together to bring about a technological revolution in hearing. We describe opportunities for rapid clinical impact through the application of existing technologies and propose directions for the development of new technologies to create true artificial auditory systems. There is an urgent need to push hearing towards a future in which artificial intelligence provides critical support for the testing of hypotheses, the development of therapies and the effective delivery of care worldwide.

    Estimating Receptive Fields from Responses to Natural Stimuli with Asymmetric Intensity Distributions

    The reasons for using natural stimuli to study sensory function are quickly mounting, as recent studies have revealed important differences in neural responses to natural and artificial stimuli. However, natural stimuli typically contain strong correlations and are spherically asymmetric (i.e., stimulus intensities are not symmetrically distributed around the mean), and these statistical complexities can bias receptive field (RF) estimates when standard techniques such as spike-triggered averaging or reverse correlation are used. While a number of approaches have been developed to explicitly correct the bias due to stimulus correlations, there is no complementary technique to correct the bias due to stimulus asymmetries. Here, we develop a method for RF estimation that corrects reverse correlation RF estimates for the spherical asymmetries present in natural stimuli. Using simulated neural responses, we demonstrate how stimulus asymmetries can bias reverse-correlation RF estimates (even for uncorrelated stimuli) and illustrate how this bias can be removed by explicit correction. We demonstrate the utility of the asymmetry correction method under experimental conditions by estimating RFs from the responses of retinal ganglion cells to natural stimuli and using these RFs to predict responses to novel stimuli.
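
    The bias itself is straightforward to reproduce in simulation. The sketch below (our own toy, not the paper's code or its correction method) drives a linear-threshold model neuron with an uncorrelated Gaussian stimulus and with an uncorrelated but asymmetric lognormal stimulus, and compares the ordinary spike-triggered average against the true filter in each case.

```python
# Compare spike-triggered averages (reverse correlation) under symmetric vs asymmetric
# uncorrelated stimuli for a known linear-threshold model neuron.
import numpy as np

rng = np.random.default_rng(4)
dim, n = 20, 200_000
idx = np.arange(dim)
true_rf = np.exp(-0.5 * ((idx - 8) / 2.0) ** 2) - 0.4 * np.exp(-0.5 * ((idx - 13) / 3.0) ** 2)
true_rf /= np.linalg.norm(true_rf)

for name, stim in [("gaussian ", rng.normal(size=(n, dim))),
                   ("lognormal", rng.lognormal(0.0, 1.0, size=(n, dim)) - np.exp(0.5))]:
    drive = stim @ true_rf
    spikes = drive > np.quantile(drive, 0.95)     # simple threshold nonlinearity
    sta = stim[spikes].mean(axis=0)               # ordinary spike-triggered average
    sta /= np.linalg.norm(sta)
    print(name, f"STA vs true RF correlation: {np.corrcoef(sta, true_rf)[0, 1]:.3f}")
```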